

A Unified Kantorovich Duality for Multimarginal Optimal Transport

Cheryala, Yehya, Alaya, Mokhtar Z., Bouzebda, Salim

arXiv.org Machine Learning

Multimarginal optimal transport (MOT) has gained increasing attention in recent years, notably due to its relevance in machine learning and statistics, where one seeks to jointly compare and align multiple probability distributions. This paper presents a unified and complete Kantorovich duality theory for the MOT problem on general Polish product spaces with a bounded continuous cost function. For compact marginal spaces, the duality identity is derived through a convex-analytic reformulation that identifies the dual problem as a Fenchel-Rockafellar conjugate. We obtain dual attainment and show that optimal potentials may always be chosen in the class of $c$-conjugate families, thereby extending the classical two-marginal conjugacy principle to a genuinely multimarginal setting. In the non-compact setting, where direct compactness arguments are unavailable, we recover duality via a truncation-tightness procedure based on the weak compactness of multimarginal transference plans and the boundedness of the cost. We prove that the dual value is preserved under restriction to compact subsets and that admissible dual families can be regularized into uniformly bounded $c$-conjugate potentials. The argument relies on a refined use of $c$-splitting sets and their equivalence with multimarginal $c$-cyclical monotonicity. We then obtain dual attainment and exact primal-dual equality for MOT on arbitrary Polish spaces, together with a canonical representation of optimal dual potentials by $c$-conjugacy. These results provide a structural foundation for further developments in the probabilistic and statistical analysis of MOT, including stability, differentiability, and asymptotic theory under marginal perturbations.
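As a concrete finite illustration of the Kantorovich duality the abstract describes, one can check primal-dual equality numerically on a tiny discrete three-marginal problem. This sketch uses a generic LP solver (SciPy/HiGHS) rather than the paper's machinery; the marginals and cost are toy data chosen here for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Toy discrete MOT: three marginals on the two-point space {0, 1}.
# (Illustrative data; not from the paper.)
n = 2
pts = np.array([0.0, 1.0])
mu = [np.array([0.5, 0.5]), np.array([0.3, 0.7]), np.array([0.6, 0.4])]

# Bounded cost c(x1, x2, x3) = sum of pairwise squared distances.
C = np.zeros((n, n, n))
for i in range(n):
    for j in range(n):
        for k in range(n):
            C[i, j, k] = ((pts[i] - pts[j]) ** 2
                          + (pts[j] - pts[k]) ** 2
                          + (pts[i] - pts[k]) ** 2)

# Primal: minimize <C, pi> over couplings pi >= 0 whose k-th marginal is mu_k.
A_eq, b_eq = [], []
for axis in range(3):
    for i in range(n):
        row = np.zeros((n, n, n))
        sel = [slice(None)] * 3
        sel[axis] = i
        row[tuple(sel)] = 1.0  # sums pi over the other two axes
        A_eq.append(row.ravel())
        b_eq.append(mu[axis][i])

res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=(0, None), method="highs")
primal = res.fun

# Dual potentials phi_k are the multipliers of the marginal constraints;
# the dual value is sum_k <phi_k, mu_k>, and Kantorovich duality says it
# matches the primal optimum.
phi = np.asarray(res.eqlin.marginals)
dual = sum(phi[axis * n + i] * mu[axis][i]
           for axis in range(3) for i in range(n))
print(primal, dual)  # equal up to solver tolerance
```

Here the optimal plan keeps as much mass as possible on the zero-cost constant triples (0,0,0) and (1,1,1); the leftover mass is forced onto cost-2 triples, and the LP dual multipliers recover the $c$-conjugate potentials of the finite problem.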






Constrained Reinforcement Learning Has Zero Duality Gap

Santiago Paternain, Luiz Chamon, Miguel Calvo-Fullana, Alejandro Ribeiro

Neural Information Processing Systems

Autonomous agents must often deal with conflicting requirements, such as completing tasks using the least amount of time/energy, learning multiple tasks, or dealing with multiple opponents. In the context of reinforcement learning (RL), these problems are addressed by (i) designing a reward function that simultaneously describes all requirements or (ii) combining modular value functions that encode them individually. Though effective, these methods have critical downsides. Designing good reward functions that balance different objectives is challenging, especially as the number of objectives grows. Moreover, implicit interference between goals may lead to performance plateaus as they compete for resources, particularly when training on-policy. Similarly, selecting parameters to combine value functions is at least as hard as designing an all-encompassing reward, given that the effect of their values on the overall policy is not straightforward.
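The constrained-RL alternative the paper studies can be written compactly; the following is a sketch of the problem and its Lagrangian dual, with notation assumed here rather than taken verbatim from the paper:

```latex
% Constrained RL: maximize a main return subject to requirement returns,
% where V_r(\pi) is the expected discounted return of policy \pi under
% reward r (notation assumed for illustration).
\begin{align}
P^\star \;=\; &\max_{\pi}\; V_{r_0}(\pi)
  \quad \text{s.t.} \quad V_{r_i}(\pi) \ge c_i, \quad i = 1,\dots,m, \\
&\text{where } V_{r}(\pi) \;=\;
  \mathbb{E}_{\pi}\Big[\textstyle\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\Big], \\
D^\star \;=\; &\min_{\lambda \ge 0}\, \max_{\pi}\;
  V_{r_0}(\pi) + \sum_{i=1}^{m} \lambda_i \big( V_{r_i}(\pi) - c_i \big).
\end{align}
% The title result is P* = D* (zero duality gap): at the optimal
% multipliers, maximizing the weighted reward r_0 + sum_i lambda_i r_i
% recovers the constrained optimum, sidestepping manual reward design.
```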



Response for "Generalized Block-Diagonal Structure Pursuit: Learning Soft Latent Task Assignment against Negative Transfer" (ID 3136)

Neural Information Processing Systems

We thank all the reviewers for their valuable comments. We have fixed the typos pointed out by the reviewers. Is the framework limited only to linear models? By Thm. 3, the generalization ability will be promising whenever the loss is small (not necessarily only at the optimal value); in this sense, a local critical point would be a good candidate solution. Are the constraints in the objective included in the class H(L, S, null S, U)?



e ∈ E_all

Neural Information Processing Systems

The language of causal inference provides further intuition for the structure imposed on Problem 3.1. Recall that Assumption 4.1 imposes [...]; in Assumption 4.2, we assume [...]. Then, given Proposition B.1, it follows that [...]. Observe that by Proposition B.1, we have that [...]. However, as we will show in Remark B.4, when strong duality holds for [...]. This useful result follows from a simple one-line proof in Section 5.6.2 of [...]. The idea here is to apply Lemma B.3 to the constant function defined by [...].

B.3 Relationship to constrained PAC learning. In contrast, the optimization problem in Problem 4.6 contains a family [...]. In this appendix, we provide the proofs that were omitted in the main text. Under Assumptions 4.1 and 4.2, Problem 3.1 is equivalent to minimizing [...]. The main idea in this proof is the following. [...] Finally, we undo our expansion to arrive at the statement of the proposition. Then, recall that by Assumption 4.2, we have that [...]. Now observe that under Assumption 4.1, we have that [...]. Under Assumptions 4.1 and 4.2, if we restrict the feasible set to the set of [...]. Before proving Proposition 5.2, we formally state the assumptions we require on [...]. We make the following assumptions: 1. [...] Given these assumptions, we restate Proposition 5.2 below: Proposition 5.2.
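The fragments above concern a constrained learning problem and its Lagrangian dual. As a generic illustration only (the exact statement of Problem 3.1 is not recoverable from this excerpt), strong duality for such constrained learning problems takes the following form:

```latex
% Generic constrained learning problem (illustrative notation, not the
% source's exact Problem 3.1):
\begin{align}
P^\star \;=\; &\min_{f \in \mathcal{F}}\;
  \mathbb{E}\big[\ell_0(f(X), Y)\big]
  \quad \text{s.t.} \quad
  \mathbb{E}\big[\ell_i(f(X), Y)\big] \le c_i, \quad i = 1,\dots,m, \\
D^\star \;=\; &\max_{\lambda \ge 0}\, \min_{f \in \mathcal{F}}\;
  \mathbb{E}\big[\ell_0(f(X), Y)\big]
  + \sum_{i=1}^{m} \lambda_i \Big( \mathbb{E}\big[\ell_i(f(X), Y)\big] - c_i \Big).
\end{align}
% Weak duality always gives D* <= P*; "strong duality" (as invoked in
% Remark B.4 above) is the statement that D* = P*.
```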